Human extinction


The Strange Disappearance of an Anti-AI Activist

The Atlantic - Technology

Sam Kirchner wants to save the world from artificial superintelligence. He's been missing for two weeks.

Before Sam Kirchner vanished, before the San Francisco Police Department began to warn that he could be armed and dangerous, before OpenAI locked down its offices over the potential threat, those who encountered him saw him as an ordinary, if ardent, activist. Phoebe Thomas Sorgen met Kirchner a few months ago at Travis Air Force Base, northeast of San Francisco, at a protest against immigration policy and U.S. military aid to Israel. Sorgen, a longtime activist whose first protests were against the Vietnam War, was going to block an entrance to the base with six other older women. Kirchner, 27 years old, was there with a couple of other members of a new group called Stop AI, and they all agreed to go along to record video on their phones in case of a confrontation with the police.


Who finds dad jokes funniest? The answer might not astonish you

New Scientist

Who finds dad jokes funniest? Feedback had a birthday within the past 12 months, and Feedback Jr gave us a card that read: "My ambition in life is to be as funny as you think you are." Still, we persist with our dad jokes, if only because our offspring's exasperated reactions are so much fun. So we were delighted to learn that two psychologists, Paul Silvia and Meriel Burnett, have taken a scholarly interest in dad jokes. They have written an entire paper on the topic.


The Paradox of Doom: Acknowledging Extinction Risk Reduces the Incentive to Prevent It

Growiec, Jakub, Prettner, Klaus

arXiv.org Artificial Intelligence

We investigate the salience of extinction risk as a source of impatience. Our framework distinguishes between human extinction risk and individual mortality risk while allowing for various degrees of intergenerational altruism. Additionally, we consider the evolutionarily motivated "selfish gene" perspective. We find that the risk of human extinction is an indispensable component of the discount rate, whereas individual mortality risk can be hedged against - partially or fully, depending on the setup - through human reproduction. Overall, we show that in the face of extinction risk, people become more impatient rather than more farsighted. Thus, the greater the threat of extinction, the less incentive there is to invest in avoiding it. Our framework can help explain why humanity consistently underinvests in mitigating catastrophic risks, from climate change and pandemic prevention to the emerging risks of transformative artificial intelligence.


A Taxonomy of Omnicidal Futures Involving Artificial Intelligence

Critch, Andrew, Tsimerman, Jacob

arXiv.org Artificial Intelligence

This report presents a taxonomy and examples of potential omnicidal events resulting from AI: scenarios where all or almost all humans are killed. These events are not presented as inevitable, but as possibilities that we can work to avoid. Insofar as large institutions require a degree of public support in order to take certain actions, we hope that by presenting these possibilities in public, we can help to support preventive measures against catastrophic risks from AI.


The Economics of p(doom): Scenarios of Existential Risk and Economic Growth in the Age of Transformative AI

Growiec, Jakub, Prettner, Klaus

arXiv.org Artificial Intelligence

Recent advances in artificial intelligence (AI) have led to a diverse set of predictions about its long-term impact on humanity. A central focus is the potential emergence of transformative AI (TAI), eventually capable of outperforming humans in all economically valuable tasks and fully automating labor. Discussed scenarios range from human extinction after a misaligned TAI takes over ("AI doom") to unprecedented economic growth and abundance ("post-scarcity"). However, the probabilities and implications of these scenarios remain highly uncertain. Here, we organize the various scenarios and evaluate their associated existential risks and economic outcomes in terms of aggregate welfare. Our analysis shows that even low-probability catastrophic outcomes justify large investments in AI safety and alignment research. We find that the optimizing representative individual would rationally allocate substantial resources to mitigate extinction risk; in some cases, she would prefer not to develop TAI at all. This result highlights that current global efforts in AI safety and alignment research are vastly insufficient relative to the scale and urgency of existential risks posed by TAI. Our findings therefore underscore the need for stronger safeguards to balance the potential economic benefits of TAI with the prevention of irreversible harm. Addressing these risks is crucial for steering technological progress toward sustainable human prosperity.


There Is a Solution to AI's Existential Risk Problem

TIME - Tech

Technological progress can excite us, politics can infuriate us, and wars can mobilize us. But faced with the risk of human extinction posed by the rise of artificial intelligence, we have remained surprisingly passive. Perhaps this is in part because there did not seem to be a solution. This is an idea I would like to challenge. Since the release of ChatGPT two years ago, hundreds of billions of dollars have poured into AI.


How Commerce Secretary Gina Raimondo Became America's Point Woman on AI

TIME - Tech

Until mid-2023, artificial intelligence was something of a niche topic in Washington, largely confined to small circles of tech-policy wonks. That all changed when, nearly two years into Gina Raimondo's tenure as Secretary of Commerce, ChatGPT's explosive popularity catapulted AI into the spotlight. Raimondo, however, was ahead of the curve. "I make it my business to stay on top of all of this," she says during an interview in her wood-paneled office overlooking the National Mall on May 21. "None of it was shocking to me." But in the year since, even she has been startled by the pace of progress.


Why Protesters Around the World Are Demanding a Pause on AI Development

TIME - Tech

Just one week before the world's second-ever global summit on artificial intelligence, protesters from a small but growing movement called "Pause AI" demanded that the world's governments regulate AI companies and freeze the development of new cutting-edge artificial intelligence models. They say that the development of these models should be allowed to continue only if companies agree to let them be thoroughly evaluated for safety first. Protests took place on Monday across thirteen countries, including the U.S., the U.K., Brazil, Germany, Australia, and Norway.

In London, a group of 20 or so protesters stood outside the U.K.'s Department for Science, Innovation and Technology chanting things like "stop the race, it's not safe" and "whose future?" The protesters say their goal is to get governments to regulate the companies developing frontier AI models, including OpenAI's ChatGPT. They say that companies are not taking enough precautions to make sure their AI models are safe enough to be released into the world.

"[AI companies] have proven time and time again… through the way that these companies' workers are treated, with the way that they treat other people's work by literally stealing it and throwing it into their models, they have proven that they cannot be trusted," said Gideon Futerman, an Oxford undergraduate student who gave a speech at the protest.

One protester, Tara Steele, a freelance writer who works on blogs and SEO content, said that she had seen the technology affect her own livelihood. "I have noticed since ChatGPT came out, the demand for freelance work has reduced dramatically," she says. "I love writing personally… I've really loved it."


Extinction Risks from AI: Invisible to Science?

Kovarik, Vojtech, van Merwijk, Christian, Mattsson, Ida

arXiv.org Artificial Intelligence

In an effort to inform the discussion surrounding existential risks from AI, we formulate Extinction-level Goodhart's Law as "Virtually any goal specification, pursued to the extreme, will result in the extinction of humanity", and we aim to understand which formal models are suitable for investigating this hypothesis. Note that we remain agnostic as to whether Extinction-level Goodhart's Law holds or not. As our key contribution, we identify a set of conditions that are necessary for a model that aims to be informative for evaluating specific arguments for Extinction-level Goodhart's Law. Since each of the conditions seems to significantly contribute to the complexity of the resulting model, formally evaluating the hypothesis might be exceedingly difficult. This raises the possibility that whether the risk of extinction from artificial intelligence is real or not, the underlying dynamics might be invisible to current scientific methods.


Thousands of AI Authors on the Future of AI

Grace, Katja, Stewart, Harlan, Sandkühler, Julia Fabienne, Thomas, Stephen, Weinstein-Raun, Ben, Brauner, Jan

arXiv.org Artificial Intelligence

In the largest survey of its kind, 2,778 researchers who had published in top-tier artificial intelligence (AI) venues gave predictions on the pace of AI progress and the nature and impacts of advanced AI systems. The aggregate forecasts give at least a 50% chance of AI systems achieving several milestones by 2028, including autonomously constructing a payment processing site from scratch, creating a song indistinguishable from a new song by a popular musician, and autonomously downloading and fine-tuning a large language model. If science continues undisrupted, the chance of unaided machines outperforming humans in every possible task was estimated at 10% by 2027, and 50% by 2047. The latter estimate is 13 years earlier than that reached in a similar survey we conducted only one year earlier [Grace et al., 2022]. However, the chance of all human occupations becoming fully automatable was forecast to reach 10% by 2037, and 50% as late as 2116 (compared to 2164 in the 2022 survey). Most respondents expressed substantial uncertainty about the long-term value of AI progress: while 68.3% thought good outcomes from superhuman AI are more likely than bad, 48% of these net optimists gave at least a 5% chance of extremely bad outcomes such as human extinction, and 59% of net pessimists gave 5% or more to extremely good outcomes. Between 38% and 51% of respondents gave at least a 10% chance to advanced AI leading to outcomes as bad as human extinction. More than half suggested that "substantial" or "extreme" concern is warranted about six different AI-related scenarios, including misinformation, authoritarian control, and inequality. There was disagreement about whether faster or slower AI progress would be better for the future of humanity. However, there was broad agreement that research aimed at minimizing potential risks from AI systems ought to be prioritized more.